Scientific discovery increasingly requires sophisticated and scalable workflows. Workflows have become the "new applications," wherein multi-scale computing campaigns comprise multiple and heterogeneous executable tasks. In particular, the introduction of AI/ML models into traditional HPC workflows has been an enabler of highly accurate modeling, typically with reduced computational needs compared to traditional methods. This chapter discusses the various modes of integrating AI/ML models into HPC computations, resulting in distinct types of AI-coupled HPC workflows. The increasing need to couple AI/ML and HPC across scientific domains is motivated, and then exemplified by a number of production-grade use cases for each mode. We additionally discuss the primary challenges of extreme-scale AI-coupled HPC campaigns (task heterogeneity, adaptivity, and performance) and several framework and middleware solutions that aim to address them. While both HPC workflows and AI/ML computing paradigms are independently effective, we highlight how their integration, and eventual convergence, is leading to significant improvements in scientific performance across a range of domains, ultimately enabling scientific explorations otherwise unattainable.
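One common mode, learning in the loop, can be sketched as a campaign in which an ML surrogate is retrained between batches of simulations and used to pick the next batch's parameters. The sketch below is purely illustrative: the toy simulation, surrogate, and selection policy are all invented, and a production campaign would use workflow middleware rather than plain Python.

```python
import random

def run_simulation(params):
    """Stand-in for an expensive HPC simulation task."""
    return {"params": params, "score": random.random()}

def train_surrogate(results):
    """Stand-in for training an ML surrogate on accumulated results."""
    best = max(r["score"] for r in results)
    return lambda p: best * random.random()   # toy "predicted score"

def propose_candidates(surrogate, n=8):
    """Rank a random candidate pool with the surrogate; keep the top n."""
    pool = [tuple(random.random() for _ in range(3)) for _ in range(64)]
    return sorted(pool, key=surrogate, reverse=True)[:n]

# Initial ensemble, then three AI-in-the-loop iterations.
results = [run_simulation(tuple(random.random() for _ in range(3)))
           for _ in range(8)]
for _ in range(3):
    surrogate = train_surrogate(results)
    for params in propose_candidates(surrogate):
        results.append(run_simulation(params))
print(f"simulations run: {len(results)}")
```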
Heterogeneous scientific workflows consist of numerous types of tasks with dependencies among them. Middleware capable of scheduling and submitting different task types across heterogeneous platforms must permit asynchronous execution of tasks for improved resource utilization, task throughput, and reduced makespan. In this paper, we investigate the asynchronous task execution requirements and properties of AI-driven HPC workflows, an important class of heterogeneous workflows. We model the degree of asynchronicity permitted for arbitrary workflows and propose key metrics that can be used to determine qualitative benefits when employing asynchronous execution. Our experiments represent important scientific drivers, were performed at scale on Summit, and the performance enhancements due to asynchronous execution are consistent with our model.
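As a concrete illustration (not from the paper), the sketch below contrasts a bulk-synchronous schedule, with a barrier between the simulation and training phases, against an asynchronous one where each training task is submitted the moment its input simulation completes. Task counts and durations are invented, and `ThreadPoolExecutor` stands in for HPC middleware.

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def simulation(i):
    time.sleep(0.05 * (i + 1))      # heterogeneous task durations
    return i

def training(i):
    time.sleep(0.05)                # shorter ML task
    return i

# Bulk-synchronous: a barrier separates the simulation and training phases.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    sims = list(pool.map(simulation, range(8)))
    list(pool.map(training, sims))
print(f"synchronous makespan:  {time.perf_counter() - start:.2f}s")

# Asynchronous: each training task is submitted as soon as its input
# simulation finishes, so fast tasks are not held back by slow ones.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    sim_futures = [pool.submit(simulation, i) for i in range(8)]
    train_futures = [pool.submit(training, f.result())
                     for f in as_completed(sim_futures)]
    for f in train_futures:
        f.result()
print(f"asynchronous makespan: {time.perf_counter() - start:.2f}s")
```

With heterogeneous task durations the asynchronous variant reports the smaller makespan, since idle workers pick up ready downstream tasks instead of waiting at the barrier.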
The need for efficient computational screening of molecular candidates that possess desired properties frequently arises in various scientific and engineering problems, including drug discovery and materials design. However, the large size of the search space containing the candidates and the substantial computational cost of high-fidelity property prediction models make screening practically challenging. In this work, we propose a general framework for constructing and optimizing a high-throughput virtual screening (HTVS) pipeline that consists of multi-fidelity models. The central idea is to optimally allocate the computational resources to models with varying costs and accuracy so as to optimize the return-on-computational-investment (ROCI). Based on both simulated and real data, we demonstrate that the proposed optimal HTVS framework can significantly accelerate screening virtually without any degradation in accuracy. Furthermore, it enables an adaptive operational strategy for HTVS, where one can trade accuracy for efficiency.
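The core idea can be sketched as a two-stage cascade: a cheap, noisy model prunes the candidate pool and the expensive model is spent only on the survivors. The models, noise levels, and cutoffs below are invented; the paper's ROCI-based allocation chooses these operating points optimally rather than by a fixed rule.

```python
import random

random.seed(0)
candidates = [random.gauss(0.0, 1.0) for _ in range(10_000)]  # true property

def low_fidelity(x):            # cheap, noisy surrogate
    return x + random.gauss(0.0, 0.5)

def high_fidelity(x):           # expensive, accurate model
    return x + random.gauss(0.0, 0.05)

# Stage 1: score everything with the cheap model, keep the top 5%.
scored = sorted(candidates, key=low_fidelity, reverse=True)
survivors = scored[: len(scored) // 20]

# Stage 2: spend the expensive model only on the survivors.
hits = [x for x in survivors if high_fidelity(x) > 2.0]
print(f"high-fidelity calls: {len(survivors)} instead of {len(candidates)}")
print(f"hits found: {len(hits)}")
```

Raising or lowering the stage-1 cutoff trades recall for high-fidelity calls, which is exactly the accuracy-for-efficiency trade-off mentioned above.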
Machine learning (ML)-based steering can improve the performance of ensemble-based simulations by allowing online selection of more scientifically meaningful computations. We present DeepDriveMD, a framework for ML-driven steering of scientific simulations, which we have used to achieve order-of-magnitude improvements in molecular dynamics (MD) performance via effective coupling of ML and HPC on large parallel computers. We discuss the design of DeepDriveMD and characterize its performance. We demonstrate that DeepDriveMD can achieve between 100-1000x acceleration relative to other methods, as measured by the amount of simulated time performed, while covering the same conformational landscape as quantified by the states sampled during the simulations. Experiments were performed on leadership-class platforms with up to 1020 nodes. The results establish DeepDriveMD as a high-performance framework for ML-driven HPC simulation scenarios that supports diverse MD simulation and ML back-ends, and that enables new scientific insights by extending the length and time scales accessible with current computational capacity.
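The steering pattern such frameworks implement can be caricatured as a simulate/learn/restart loop. In the toy version below, a random walk stands in for MD and a distance-from-mean score stands in for the ML model (real deployments couple MD engines with deep-learning models such as convolutional variational autoencoders); only the loop structure reflects the approach.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_md(start, steps=100):
    """Stub MD: a random walk from a starting conformation (2-D toy)."""
    return start + np.cumsum(rng.normal(0, 0.1, size=(steps, 2)), axis=0)

def outlier_scores(frames):
    """Stub 'ML model': distance from the mean as a novelty score."""
    center = frames.mean(axis=0)
    return np.linalg.norm(frames - center, axis=1)

ensemble = [run_md(np.zeros(2)) for _ in range(4)]
for _ in range(3):
    frames = np.concatenate(ensemble)
    scores = outlier_scores(frames)
    restarts = frames[np.argsort(scores)[-4:]]    # least-sampled states
    ensemble = [run_md(r) for r in restarts]      # steer the next batch
print(f"explored radius: {outlier_scores(np.concatenate(ensemble)).max():.2f}")
```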
The rapid development of remote sensing technologies has gained significant attention due to their ability to accurately localize, classify, and segment objects from aerial images. These technologies are commonly used in unmanned aerial vehicles (UAVs) equipped with high-resolution cameras or sensors to capture data over large areas. This data is useful for various applications, such as monitoring and inspecting cities, towns, and terrains. In this paper, we present a method for classifying and segmenting city road traffic dashed lines from aerial images using deep learning models such as U-Net and SegNet. The annotated data is used to train these models, which are then used to classify and segment the aerial image into two classes: dashed lines and non-dashed lines. However, the deep learning model may not be able to identify all dashed lines due to poor painting or occlusion by trees or shadows. To address this issue, we propose a method to add missed lines to the segmentation output. We also extract the x and y coordinates of each dashed line from the segmentation output, which city planners can use to construct a CAD file for digital visualization of the roads.
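As a sketch of the coordinate-extraction step, one can run connected-component analysis on the binary mask produced by the segmentation model and report each dash's centroid. The synthetic mask and parameters below are illustrative, not the paper's pipeline.

```python
import numpy as np
import cv2

# Synthetic binary mask standing in for U-Net/SegNet output
# (255 = dashed-line pixels): a dashed line along the row y = 50.
mask = np.zeros((100, 200), dtype=np.uint8)
for x0 in range(10, 190, 30):
    mask[48:52, x0:x0 + 15] = 255

# Each connected component is one dash; its centroid is the (x, y)
# coordinate to export toward a CAD file.
n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
dashes = [tuple(np.round(c, 1)) for c in centroids[1:]]  # skip background
print(f"{n - 1} dashes at (x, y): {dashes}")
```

Missed dashes could then be filled in by fitting a line through collinear centroids and interpolating at the expected dash spacing.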
Though impressive success has been witnessed in computer vision, deep learning still suffers from the domain shift challenge when the target domain for testing and the source domain for training do not share an identical distribution. To address this, domain generalization approaches aim to extract domain-invariant features that lead to a more robust model. Hence, increasing the source domain diversity is a key component of domain generalization. Style augmentation takes advantage of instance-specific feature statistics containing informative style characteristics to synthesize novel domains. However, previous works either ignored the correlation between different feature channels or limited style augmentation to linear interpolation. In this work, we propose a novel augmentation method, called \textit{Correlated Style Uncertainty (CSU)}, that goes beyond linear interpolation of the style statistic space while preserving the essential correlation information. We validate our method's effectiveness through extensive experiments on multiple cross-domain classification tasks, including the widely used PACS, Office-Home, and Camelyon17 datasets and the Duke-Market1501 instance retrieval task, obtaining significant improvements over state-of-the-art methods. The source code is available for public use.
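For context, the linear-interpolation baseline the paper generalizes can be sketched as follows: per-channel feature means and standard deviations are mixed between instances (a la MixStyle). CSU itself replaces this channel-wise interpolation with sampling that preserves cross-channel correlations, which is not reproduced here.

```python
import torch

def mix_style(x: torch.Tensor, alpha: float = 0.1) -> torch.Tensor:
    """x: (B, C, H, W) feature map; returns style-augmented features."""
    b = x.size(0)
    mu = x.mean(dim=(2, 3), keepdim=True)             # per-instance style
    sig = x.std(dim=(2, 3), keepdim=True) + 1e-6
    normed = (x - mu) / sig                           # strip style
    perm = torch.randperm(b)                          # donor instances
    lam = torch.distributions.Beta(alpha, alpha).sample((b, 1, 1, 1))
    mu_mix = lam * mu + (1 - lam) * mu[perm]          # linear interpolation
    sig_mix = lam * sig + (1 - lam) * sig[perm]
    return normed * sig_mix + mu_mix                  # re-style

features = torch.randn(8, 64, 32, 32)
print(mix_style(features).shape)  # torch.Size([8, 64, 32, 32])
```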
Time-of-flight (ToF) distance measurement devices such as ultrasonics, LiDAR and radar are widely used in autonomous vehicles for environmental perception, navigation and assisted braking control. Despite their relative importance in making safer driving decisions, these devices are vulnerable to multiple attack types including spoofing, triggering and false data injection. When these attacks are successful, they can compromise the security of autonomous vehicles, leading to severe consequences for the driver, nearby vehicles and pedestrians. To handle these attacks and protect the measurement devices, we propose a spatial-temporal anomaly detection model, \textit{STAnDS}, which incorporates a residual error spatial detector with a time-based expected change detection. This approach is evaluated using a simulated quantitative environment and the results show that \textit{STAnDS} is effective at detecting multiple attack types.
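The temporal half of this idea can be sketched as residual checking against an expected-change model: predict the next ToF reading from recent history and flag readings whose residual is far outside the recent noise level. The constant-velocity predictor, threshold, and injected attack below are invented, and STAnDS's spatial detector is omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(1)
readings = 10.0 - 0.02 * np.arange(200) + rng.normal(0, 0.01, 200)
readings[120] = 2.5                      # injected false distance

def residual_anomalies(x, window=10, k=6.0):
    """Flag samples whose residual against a constant-velocity
    prediction exceeds k times the recent noise level."""
    flags = []
    for t in range(window, len(x)):
        hist = x[t - window:t]
        slope = (hist[-1] - hist[0]) / (window - 1)
        expected = hist[-1] + slope
        resid = abs(x[t] - expected)
        if resid > k * (np.std(np.diff(hist)) + 1e-9):
            flags.append(t)
    return flags

print(f"anomalous samples: {residual_anomalies(readings)}")
```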
Recently, automated co-design of machine learning (ML) models and accelerator architectures has attracted significant attention from both the industry and academia. However, most co-design frameworks either explore a limited search space or employ suboptimal exploration techniques for simultaneous design decision investigations of the ML model and the accelerator. Furthermore, training the ML model and simulating the accelerator performance is computationally expensive. To address these limitations, this work proposes a novel neural architecture and hardware accelerator co-design framework, called CODEBench. It is composed of two new benchmarking sub-frameworks, CNNBench and AccelBench, which explore expanded design spaces of convolutional neural networks (CNNs) and CNN accelerators. CNNBench leverages an advanced search technique, BOSHNAS, to efficiently train a neural heteroscedastic surrogate model to converge to an optimal CNN architecture by employing second-order gradients. AccelBench performs cycle-accurate simulations for a diverse set of accelerator architectures in a vast design space. With the proposed co-design method, called BOSHCODE, our best CNN-accelerator pair achieves 1.4% higher accuracy on the CIFAR-10 dataset compared to the state-of-the-art pair, while enabling 59.1% lower latency and 60.8% lower energy consumption. On the ImageNet dataset, it achieves 3.7% higher Top1 accuracy at 43.8% lower latency and 11.2% lower energy consumption. CODEBench outperforms the state-of-the-art framework, i.e., Auto-NBA, by achieving 1.5% higher accuracy and 34.7x higher throughput, while enabling 11.0x lower energy-delay product (EDP) and 4.0x lower chip area on CIFAR-10.
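At a very high level, the co-design loop amounts to scoring (CNN, accelerator) pairs under a joint objective. The sketch below exhaustively scores a tiny invented design space with a stub evaluator; CODEBench instead trains models, runs cycle-accurate simulation, and drives the search with the BOSHNAS surrogate, none of which is reproduced here.

```python
import itertools

cnn_space = {"depth": [8, 16, 32], "width": [32, 64, 128]}
accel_space = {"pe_array": [16, 64], "sram_kb": [256, 1024]}

def evaluate(cnn, accel):
    """Stub standing in for NAS training plus cycle-accurate simulation."""
    acc = 0.70 + 0.001 * cnn["depth"] + 0.0005 * cnn["width"]
    lat = cnn["depth"] * cnn["width"] / accel["pe_array"]
    energy = lat * (0.5 + accel["sram_kb"] / 2048)
    return acc, lat, energy           # (accuracy, latency_ms, energy_mj)

def score(acc, lat, energy, w=(1.0, 0.001, 0.001)):
    """Scalarize accuracy/latency/energy into a single objective."""
    return w[0] * acc - w[1] * lat - w[2] * energy

pairs = [(dict(zip(cnn_space, c)), dict(zip(accel_space, a)))
         for c in itertools.product(*cnn_space.values())
         for a in itertools.product(*accel_space.values())]
best = max(pairs, key=lambda p: score(*evaluate(*p)))
print("best pair:", best)
```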
Point Cloud Registration is the problem of aligning the corresponding points of two 3D point clouds referring to the same object. The challenges include dealing with noise and partial match of real-world 3D scans. For non-rigid objects, there is an additional challenge of accounting for deformations in the object shape that happen to the object in between the two 3D scans. In this project, we study the problem of non-rigid point cloud registration for use cases in the Augmented/Mixed Reality domain. We focus our attention on a special class of non-rigid deformations that happen in rigid objects with parts that move relative to one another about joints, for example, robots with hands and machines with hinges. We propose an efficient and robust point-cloud registration workflow for such objects and evaluate it on real-world data collected using Microsoft Hololens 2, a leading Mixed Reality Platform.
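For objects whose parts move rigidly about joints, registration can be decomposed: align each segmented part with ordinary rigid ICP and recover a per-part transform. The sketch below illustrates this with Open3D's point-to-point ICP on synthetic data; the part segmentation is assumed given, and this is not the paper's exact workflow.

```python
import numpy as np
import open3d as o3d

def register_part(src_pts: np.ndarray, dst_pts: np.ndarray) -> np.ndarray:
    """Rigid ICP between two scans of one segmented part; returns 4x4."""
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(src_pts))
    dst = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(dst_pts))
    est = o3d.pipelines.registration.TransformationEstimationPointToPoint()
    result = o3d.pipelines.registration.registration_icp(
        src, dst, max_correspondence_distance=0.05, estimation_method=est)
    return result.transformation

# Toy data: one 'part' rotated by 15 degrees about a hinge on the z-axis.
rng = np.random.default_rng(0)
part = rng.uniform(0.0, 0.2, size=(500, 3))
theta = np.radians(15.0)
rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0, 0.0, 1.0]])
print(register_part(part, part @ rot.T))
```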
Robots have been steadily increasing their presence in our daily lives, where they can work alongside humans to provide assistance in various tasks on industry floors, in offices, and in homes. Automated assembly is one of the key applications of robots, and the next generation of assembly systems could become much more efficient by creating collaborative human-robot systems. However, although collaborative robots have been around for decades, their application in truly collaborative systems has been limited. This is because a truly collaborative human-robot system needs to adjust its operation with respect to the uncertainty and imprecision in human actions and ensure safety during interaction. In this paper, we present a system for human-robot collaborative assembly using learning from demonstration and pose estimation, so that the robot can adapt to the uncertainty caused by the operation of humans. Learning from demonstration is used to generate motion trajectories for the robot based on the pose estimate of different goal locations from a deep learning-based vision system. The proposed system is demonstrated using a physical 6 DoF manipulator in a collaborative human-robot assembly scenario. We show successful generalization of the system's operation to changes in the initial and final goal locations through various experiments.
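The adaptation step can be sketched as retargeting a demonstrated end-effector trajectory to a new goal pose reported by the vision system. The linear warp below is only illustrative (practical LfD systems typically use, e.g., dynamic movement primitives), and all poses are invented.

```python
import numpy as np

# Demonstrated end-effector path from a start pose to a goal pose (T, 3).
demo = np.linspace([0.0, 0.0, 0.3], [0.4, 0.2, 0.05], num=50)

def retarget(demo: np.ndarray, new_start, new_goal) -> np.ndarray:
    """Affinely warp the demo so it runs from new_start to new_goal,
    keeping the demonstrated shape as a residual around a straight line."""
    s = np.linspace(0.0, 1.0, len(demo))[:, None]          # phase variable
    residual = demo - ((1 - s) * demo[0] + s * demo[-1])   # shape component
    return (1 - s) * np.asarray(new_start) + s * np.asarray(new_goal) + residual

goal_from_vision = [0.35, 0.30, 0.02]    # pose estimate of the target part
trajectory = retarget(demo, demo[0], goal_from_vision)
print(trajectory[0], trajectory[-1])     # starts at demo start, ends at goal
```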